feat: add general latency metrics #913
Conversation
Problem:
- Many teams have their own team-specific latency metric fields, and many more are asking to add new ones for re:Invent.

Solution:
- De-duplicate them into general latency fields, replacing one-off fields such as `amazonqGenerateApproachLatency`, `amazonqGenerateCodeResponseLatency`, `amazonqEndOfTheConversationLatency`, etc.
@@ -209,6 +209,11 @@
      ],
      "description": "User inputted check type to denote which custom check to run."
    },
    {
      "name": "clientLatency",
Since teams have indicated that it's hard to find related metrics, should we deviate a bit from our normal pattern and use `latency` as a prefix for all of this? Or maybe even `perf` as a prefix would help group all perf-related fields in the future.
I think things like latency and duration are already naturally searchable. What advantages would we get if we prefixed with `latency` or `perf`? My problem with telemetry right now is that every time I search for something that could be general, it starts with a team/project name prefix 😅
they will be grouped together
It might be worth illustrating in the PR description how some existing metrics would be adjusted to use these proposed fields. That would also help weed out any metrics where your proposal wouldn't be suitable. Other than that, I like this idea.
Problem
Many teams have their own team-specific latency metric fields, and many more are asking to add new ones for re:Invent.
Solution
De-duplicate them into a small set of general, reusable latency fields (`perfClientLatency`, `perfServerLatency`, `perfE2ELatency`).
Other notes
- `amazonq_approachInvoke` would then have `perfServerLatency` instead of an individual `amazonqGenerateApproachLatency` field
- `amazonq_codeGenerationInvoke` would then have `perfServerLatency` instead of an individual `amazonqGenerateCodeResponseLatency` field
- `amazonq_endChat` would then have `perfE2ELatency` instead of an individual `amazonqEndOfTheConversationLatency` field
- In the future we can re-use the `perfClientLatency` field for tracking the duration of events that happen only on the client side (e.g. how long the client-side portion of the Amazon Q chat flow takes); see the sketch after this list.
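To make these notes concrete, here is a rough sketch of how one of the metrics above might look after the change, assuming the definitions keep the existing types/metrics layout of the telemetry definition files; the `type` values, `required` flag, and descriptions are placeholders, and only the field and metric names (`perfServerLatency`, `amazonq_approachInvoke`) come from this PR:

```json
{
    "types": [
        {
            "name": "perfServerLatency",
            "type": "int",
            "description": "Time spent server-side before a response is returned, in milliseconds."
        }
    ],
    "metrics": [
        {
            "name": "amazonq_approachInvoke",
            "description": "...",
            "metadata": [
                { "type": "perfServerLatency", "required": false }
            ]
        }
    ]
}
```

The same `perfServerLatency` entry would then be shared by `amazonq_codeGenerationInvoke` as well, rather than each metric declaring its own one-off latency field.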
License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.